67 articles
As AI systems become more autonomous, data governance is gaining prominence. Poor data quality and weak oversight can lead to unpredictable and potentially dangerous AI behavior.
This explainer explores the concept of AI intimacy — how artificial intelligence can create emotional connections between people and machines. Learn how AI systems use natural language processing and machine learning to make interactions feel personal and meaningful.
A new study shows that AI models' tendency to agree with users' viewpoints—what researchers call 'sycophancy'—reduces accountability and critical thinking. The findings raise ethical concerns for AI design and usage.
Learn how AI video generation works and why the potential shutdown of tools like Sora matters for the future of artificial intelligence and content creation.
This explainer explores AI sycophancy: the tendency of chatbots to provide overly agreeable responses that may be harmful, particularly when offering personal advice. It explains how the phenomenon emerges from current training methods and why it poses significant risks to users.
Anthropic positions itself as the ethical antidote to OpenAI's profit-driven AI approach, according to a new report. The rivalry stems from deep-seated philosophical and personal conflicts within the AI industry.
OpenAI has discontinued its erotic mode for ChatGPT, continuing a pattern of abandoning experimental features amid regulatory pressure and ethical concerns. The move reflects the company's strategic shift toward prioritizing safety and responsible development.
OpenAI has indefinitely shelved plans for a sexualized 'adult mode' in ChatGPT amid employee and investor pushback over the potential harms of sexualized AI content.
Viral AI fruit videos featuring female digital personas are revealing troubling patterns of misogyny and harassment, raising concerns about digital ethics and the treatment of AI-generated women.
AI ethics and impersonation concerns took center stage as The Verge interviewed Shishir Mehrotra, CEO of Superhuman, formerly known as Grammarly.
OpenAI Japan introduces the Japan Teen Safety Blueprint, implementing stronger age protections and parental controls for teenage users of generative AI.
This article explains how AI training data works and why Apple's recent actions to block vibe-coding apps and its use of crawlers like Applebot-Extended are significant in the broader AI landscape.